Search for: All records
Total Resources: 3
- Ou, Yanchen Jessie; Barnett, Alina J.; Mitra, Anika; Schwartz, Fides R.; Chen, Chaofan; Grimm, Lars; Lo, Joseph Y.; Rudin, Cynthia (Medical Imaging 2023: Image Perception, Observer Performance, and Technology Assessment). Chen, Yan; Mello-Thoms, Claudia R. (Eds.)
  Tools for computer-aided diagnosis based on deep learning have become increasingly important in the medical field. Such tools can be useful, but they must communicate their decision-making process effectively in order to guide clinical decisions safely and meaningfully. We present a user interface that incorporates the IAIA-BL model, which interpretably predicts both mass margin and malignancy for breast lesions. The interface displays the most relevant aspects of the model's explanation, including the predicted margin value, the AI's confidence in the prediction, and the two most highly activated prototypes for each case. It also includes full-field and cropped images of the region of interest, as well as a questionnaire suitable for a reader study. Our preliminary results indicate that the model increases readers' confidence and accuracy in their decisions on margin and malignancy.
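The "most highly activated prototypes" in the abstract above come from prototypical-parts reasoning. A minimal numpy sketch of that idea, in the ProtoPNet style: each prototype's activation is a decreasing function of the smallest squared distance between the prototype and any patch in the latent feature map. The function name, array shapes, and log activation form are illustrative assumptions, not the IAIA-BL implementation.

```python
import numpy as np

def top_activated_prototypes(feature_map: np.ndarray,
                             prototypes: np.ndarray, k: int = 2) -> np.ndarray:
    """Rank prototypes by their strongest activation anywhere in the
    latent feature map: activation decreases with the smallest
    patch-to-prototype squared L2 distance (ProtoPNet-style)."""
    h, w, d = feature_map.shape
    patches = feature_map.reshape(-1, d)                # (h*w, d) latent patches
    # Squared distance from every patch to every prototype: (h*w, P).
    diff = patches[:, None, :] - prototypes[None, :, :]
    d2 = (diff ** 2).sum(axis=-1)
    best = d2.min(axis=0)                               # closest patch per prototype
    activation = np.log((best + 1.0) / (best + 1e-4))   # large when some patch matches
    return np.argsort(activation)[::-1][:k]             # indices of k most activated
```

A user interface like the one described would then display the image patches that produced these top-k activations alongside the prototypes they matched.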
- Barnett, Alina J.; Sharma, Vaibhav; Gajjar, Neel; Fang, Jerry D.; Schwartz, Fides; Chen, Chaofan; Lo, Joseph Y.; Rudin, Cynthia (Proc. SPIE 12035, Medical Imaging 2022: Image Perception, Observer Performance, and Technology Assessment). Mello-Thoms, Claudia R.; Taylor-Phillips, Sian (Eds.)
  There is increasing interest in using deep learning and computer vision to help guide clinical decisions, such as whether to order a biopsy based on a mammogram. Existing networks are typically black boxes, unable to explain how they make their predictions. We present an interpretable deep-learning network that explains its predictions in terms of the BI-RADS features mass shape and mass margin. Our model predicts mass margin and mass shape, then uses the logits from those interpretable models to predict malignancy, also with an interpretable model. The interpretable mass margin model explains its predictions using a prototypical parts model. The interpretable mass shape model predicts segmentations, fits an ellipse, and then determines shape from the goodness of fit and the eccentricity of the fitted ellipse. While including mass shape logits in the malignancy prediction model did not improve performance, we present this technique as part of a framework for better clinician-AI communication.
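The mass-shape step described above (fit an ellipse to the predicted segmentation, then judge shape from fit quality and eccentricity) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the moment-based ellipse fit, the 2-sigma coverage test, and the thresholds are all assumptions.

```python
import numpy as np

def ellipse_shape(mask: np.ndarray, ecc_round: float = 0.6,
                  min_inside: float = 0.9) -> str:
    """Toy shape rule: fit an ellipse to a binary mask via second-order
    moments, then label the mask from the ellipse eccentricity and from
    how well the ellipse covers the mask pixels."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs - xs.mean(), ys - ys.mean()])    # centered coordinates
    cov = np.cov(pts)                                   # second-order moments
    evals = np.linalg.eigvalsh(cov)                     # ascending: minor, major
    b, a = np.sqrt(np.maximum(evals, 1e-12))            # semi-axis lengths (up to scale)
    ecc = np.sqrt(1.0 - (b / a) ** 2)                   # 0 = circle, toward 1 = elongated
    # Goodness of fit: fraction of mask pixels inside the 2-sigma ellipse
    # (squared Mahalanobis distance <= 4) implied by the mask's moments.
    d2 = np.einsum('ij,ji->i', pts.T @ np.linalg.inv(cov), pts)
    inside = float(np.mean(d2 <= 4.0))
    if inside < min_inside:
        return 'irregular'                              # ellipse fits the mask poorly
    return 'round' if ecc < ecc_round else 'oval'
```

For a solid disk the eccentricity is near zero and nearly all pixels fall inside the fitted ellipse, so the rule returns 'round'; an elongated but well-fit blob returns 'oval'; a mask the ellipse covers poorly returns 'irregular'.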